A Comparison between OT and HG from a Computational Perspective

Authors

  • Giorgio Magri
  • Jason Riggle

Abstract

Optimality Theory (OT) and Harmonic Grammar (HG) differ because the former assumes a model of constraint interaction based on strict domination, while the latter assumes a weighted model of interaction. As Prince and Smolensky (1997) admit, “that strict domination governs grammatical constraint interaction is not currently explained”. Yet, Legendre et al. (2006, 911-912) make two suggestions. The first suggestion is that OT’s strict domination might have algorithmic advantages, in the sense that it “may enable quick-and-dirty optimization algorithms [...] to consistently find a single global [...] optimum, whereas arbitrarily weighted constraints typically lead such algorithms to produce widely varying solutions, each only a local optimum.” The second suggestion is that OT’s strict domination might have learnability advantages: “another possibility is that demands of learnability provide a pressure for strict domination among constraints”, although they note that “it remains an open problem to formally characterize exactly what is essential about strict domination to guarantee efficient learning.”
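The contrast between the two modes of constraint interaction can be made concrete with a small sketch (not from the paper; candidate and constraint names are hypothetical): OT compares two candidates' violation vectors lexicographically in ranking order, so the highest-ranked constraint on which they differ decides alone, while HG compares weighted sums of violations, so lower-weighted constraints can "gang up" on a higher-weighted one.

```python
def ot_prefers(a, b):
    """OT strict domination: violation vectors are listed highest-ranked
    constraint first; the first constraint on which the candidates differ
    decides the comparison, ignoring everything ranked below it."""
    for va, vb in zip(a, b):
        if va != vb:
            return va < vb
    return False  # identical violation profiles: tie


def hg_harmony(violations, weights):
    """HG weighted interaction: harmony is the negative weighted sum of
    violations; the candidate with the higher harmony wins."""
    return -sum(w * v for w, v in zip(weights, violations))


# Two hypothetical candidates violating three constraints C1 >> C2 >> C3:
cand_a = [0, 3, 0]   # three violations of the mid-ranked constraint
cand_b = [1, 0, 0]   # one violation of the top-ranked constraint

# Under strict domination, A wins: C1 alone decides, C2 is irrelevant.
assert ot_prefers(cand_a, cand_b)

# Under weighting, the three C2 violations gang up, so B wins instead.
weights = [2.0, 1.0, 1.0]
assert hg_harmony(cand_a, weights) < hg_harmony(cand_b, weights)
```

The divergence between the two assertions is exactly the "gang effect" that weighted models admit and strict domination rules out.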


Similar articles

HG has no computational advantages over OT: consequences for the theory of OT online algorithms

Various authors have recently endorsed Harmonic Grammar (HG) as a replacement for Optimality Theory (OT). One argument for this move is based on computational considerations: OT looks prima facie like an exotic framework with no correspondent in Machine Learning, and the replacement with HG allows methods and results from Machine Learning to be imported within Computational Phonology; see for in...


HG Has No Computational Advantages over OT

The peculiar property of Optimality Theory (OT) is that it uses constraint ranking and thus enforces strict domination, according to which the highest-ranked relevant constraint “takes it all”; see Prince & Smolensky (2004). Because of this property, OT looks prima facie like an exotic combinatorial framework, in the sense that it does not seem to have any close correspondent within core...


Noise robustness and stochastic tolerance of OT error-driven ranking algorithms

Recent counterexamples show that Harmonic Grammar (HG) error-driven learning (with the classical Perceptron reweighting rule) is not robust to noise and does not tolerate the stochastic implementation (Magri 2014, MS). This article guarantees that no analogous counterexamples are possible for proper Optimality Theory (OT) error-driven learners. In fact, a simple extension of the OT convergence a...


Error-driven Learning in OT and HG: a Comparison

The OT error-driven learner is known to admit guarantees of efficiency, stochastic tolerance and noise robustness which hold independently of any substantive assumptions on the constraints. This paper shows that the HG learner instead does not admit such constraint-independent guarantees. The HG theory of error-driven learning thus needs to be substantially restricted to specific constraint sets.


Error-driven Learning in Harmonic Grammar

The HG literature has so far adopted the Perceptron reweighting rule because of its convergence guarantees. Yet, this rule is not suited to HG, as it fails to ensure non-negativity of the weights. The first contribution of this paper is a solution to this impasse. I consider a variant of the Perceptron which truncates any update at zero, thus maintaining the weights non-negative in a principle...



Journal title:

Volume   Issue 

Pages  -

Publication date: 2011